Machine learning is the dominant approach to artificial intelligence, through which computers learn from data and experience. In the framework of supervised learning, for a computer to learn from data accurately and efficiently, some auxiliary information about the data distribution and target function should be provided to it through the learning model. This notion of auxiliary information relates to the concept of regularization in statistical learning theory. A common feature among real-world datasets is that data domains are multiscale and target functions are well-behaved and smooth. In this paper, we propose a learning model that exploits this multiscale data structure and discuss its statistical and computational benefits. The hierarchical learning model is inspired by the logical and progressive easy-to-hard learning mechanism of human beings and has interpretable levels. The model apportions computational resources according to the complexity of data instances and target functions. This property can have multiple benefits, including higher inference speed and computational savings in training a model for many users or when training is interrupted. We provide a statistical analysis of the learning mechanism using multiscale entropies and show that it can yield significantly stronger guarantees than uniform convergence bounds.
The mixture of factor analyzers (MFA) model is an efficient tool for the analysis of high-dimensional data, in which the factor-analyzer technique, operating on the covariance matrices, reduces the number of free parameters. The model also provides an important methodology for determining latent groups in data. Several studies have extended the model to asymmetric and/or outlier-contaminated datasets, with some known computational limitations, in the frequentist setting. In this paper, we introduce an MFA model built on the rich and flexible class of unrestricted skew-normal generalized hyperbolic (SUNGH) distributions, together with a Bayesian structure that brings several computational benefits. The SUNGH family provides considerable flexibility to model skewness in different directions as well as to allow for heavy-tailed data. The family has several desirable structural properties, including an analytically tractable density that eases the computation required for parameter estimation. In the factor analysis setting, the SUNGH family also allows for skewness and heavy tails in both the error component and the factor scores. We discuss the advantages of using this family of distributions and demonstrate the efficiency of the proposed MFA model on real data examples and in simulation.
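The parameter reduction at the heart of the MFA model can be seen in a small generative sketch. The SUNGH factors and errors of the paper are replaced here by Gaussians purely to illustrate the covariance structure; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mfa(n, weights, mus, lambdas, psis):
    """Draw n points from a mixture of factor analyzers.

    Component g generates x = mu_g + Lambda_g @ f + e, with factor
    scores f ~ N(0, I) and noise e ~ N(0, diag(psi_g)).  (Gaussian
    stand-ins for the paper's SUNGH factors and errors.)
    """
    g_idx = rng.choice(len(weights), size=n, p=weights)
    p, q = lambdas[0].shape
    x = np.empty((n, p))
    for i, g in enumerate(g_idx):
        f = rng.standard_normal(q)             # latent factor scores
        e = rng.normal(0.0, np.sqrt(psis[g]))  # diagonal noise
        x[i] = mus[g] + lambdas[g] @ f + e
    return x, g_idx

# One component, p = 4 observed dims, q = 1 factor: the covariance is
# Lambda Lambda' + diag(psi) -- far fewer free parameters than a full
# p x p covariance matrix.
Lam = np.array([[1.0], [0.8], [0.6], [0.4]])
x, _ = sample_mfa(5000, [1.0], [np.zeros(4)], [Lam], [np.full(4, 0.1)])
```

The empirical covariance of the sample should be close to Lambda Lambda' + diag(psi), e.g. about 0.8 for the (0, 1) entry above.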
In recent years, the dominant paradigm for text spotting has been to combine the tasks of text detection and recognition into a single end-to-end framework. Under this paradigm, both tasks are accomplished by operating over a shared global feature map extracted from the input image. One of the main challenges faced by end-to-end approaches is the performance degradation when recognizing text across scale variations (smaller or larger text) and arbitrary word rotation angles. In this work, we address these challenges by proposing a novel global-to-local attention mechanism for text spotting, termed GLASS, that fuses together global and local features. The global features are extracted from the shared backbone, preserving contextual information from the entire image, while the local features are computed individually on resized, high-resolution rotated word crops. The information extracted from the local crops alleviates much of the inherent difficulty with scale and word rotation. We present a performance analysis across scales and angles, highlighting improvement at scale and angle extremities. In addition, we introduce an orientation-aware loss term supervising the detection task, and show its contribution to both detection and recognition performance across all angles. Finally, we show that GLASS is general by incorporating it into other leading text spotting architectures, improving their text spotting performance. Our method achieves state-of-the-art results on multiple benchmarks, including the newly released TextOCR.
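The global-to-local fusion idea can be sketched with a simple learned gate that blends a global feature vector with a local one per channel. The gate parameters `W` and `b` are hypothetical; the actual GLASS attention module is more elaborate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse_global_local(g, l, W, b):
    """Gated fusion of a global feature vector g (from the shared
    backbone) and a local feature vector l (from a resized, rotated
    word crop).  Each output channel is a convex combination of the
    two inputs, weighted by a gate in (0, 1)."""
    gate = sigmoid(W @ np.concatenate([g, l]) + b)
    return gate * g + (1.0 - gate) * l

d = 8
rng = np.random.default_rng(1)
g = rng.standard_normal(d)           # global context features
l = rng.standard_normal(d)           # local word-crop features
W = rng.standard_normal((d, 2 * d)) * 0.1
fused = fuse_global_local(g, l, W, np.zeros(d))
```

Because the gate lies in (0, 1), each fused channel stays between the corresponding global and local values, letting the network lean on local evidence for extreme scales and rotations.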
Objective: With the rapid rise of wearable sleep monitoring devices with non-conventional electrode configurations, there is a need for automated algorithms that can perform sleep staging on configurations with small amounts of labeled data. Transfer learning has the ability to adapt neural network weights from a source modality (e.g., the standard electrode configuration) to a new target modality (e.g., a non-conventional electrode configuration). Methods: We propose feature matching, a new transfer learning strategy as an alternative to the commonly used finetuning approach. This method consists of training a model with larger amounts of data from the source modality and only few paired samples from both the source and the target modality. For those paired samples, the model extracts features of the target modality and matches them to the features of the corresponding samples from the source modality. Results: We compare feature matching to finetuning for three different target domains, with two different neural network architectures, and with varying amounts of training data. Particularly on small cohorts (i.e., few labeled recordings in the non-conventional recording setting), feature matching systematically outperforms finetuning, with average relative differences in accuracy ranging from 0.4% to 4.7% across scenarios and datasets. Conclusion: Our findings suggest that feature matching outperforms finetuning as a transfer learning approach, especially in very low data regimes. Significance: We therefore conclude that feature matching is a promising new method for wearable sleep staging with novel devices.
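The training objective described above can be sketched as a supervised loss on source-modality data plus a mean-squared term pulling target-modality features toward those of the paired source samples. The weighting `lam` is a hypothetical hyperparameter; the paper's exact formulation may differ.

```python
import numpy as np

def feature_matching_loss(cls_loss, feat_src, feat_tgt, lam=1.0):
    """Combined objective: supervised loss on abundant source-modality
    data plus an MSE term that matches target-modality features
    (feat_tgt) to the features of the paired source-modality samples
    (feat_src), each of shape (n_pairs, n_features)."""
    match = np.mean((feat_tgt - feat_src) ** 2)
    return cls_loss + lam * match
```

When the paired features already agree, the matching term vanishes and only the supervised loss remains.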
Social media is commonly used by the public during election campaigns to express opinions on different issues. Among the various social media channels, Twitter provides an efficient platform for researchers and politicians to explore public opinion on a wide range of topics, such as the economy and foreign policy. The current literature mainly focuses on analyzing the content of tweets without considering the gender of users. This research collects and analyzes a large number of tweets, using computational, human-coded, and statistical analyses to identify topics in more than 300,000 tweets posted during the 2020 U.S. presidential election. Our findings cover a broad range of topics, such as taxes, climate change, and COVID-19. For more than 70% of the topics, there are significant differences between female and male users.
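A standard way to test for a significant gender difference on a topic is a two-proportion z-test on the share of tweets mentioning it. The abstract does not specify which test was used, so the following is an illustrative sketch with made-up counts.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions,
    e.g. the share of female vs. male users tweeting about a topic."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical counts: 1,200 of 10,000 tweets by women vs. 900 of
# 10,000 by men mention a given topic.
z, p = two_proportion_z(1200, 10000, 900, 10000)
```

With samples of this size even a three-percentage-point gap is highly significant, which is consistent with differences emerging on most topics.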
We analyze the effect of offline and online triplet mining on a colorectal cancer (CRC) histopathology dataset containing 100,000 patches. We consider the extremes, i.e., the farthest and nearest patches to a given anchor, in both online and offline mining. While many works focus only on selecting triplets online (within mini-batches), we also study the effect of extreme distances and neighbor patches computed before training, in an offline fashion. We analyze the impact of extreme-case offline embedding distances versus online mining, including easy-positive, batch semi-hard, and batch hard triplet mining, the neighborhood component analysis loss, its proxy version, and distance-weighted sampling. We also investigate online approaches based on extreme distances, comprehensively compare offline and online mining performance based on the data patterns, and interpret offline mining as a tractable generalization of online mining with a large mini-batch size. In addition, we discuss the relations among different colorectal tissue types. We find that offline and online mining approaches have comparable performance for a specific architecture (e.g., ResNet-18) in this study. Moreover, we find that the various cases involving different extreme distances are promising, especially in the online approaches.
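The extreme-distance idea can be sketched as follows: for each anchor, pick the farthest same-class patch (hardest positive) and the nearest other-class patch (hardest negative) over the whole embedding set rather than within a mini-batch. This is a simplified illustration, not the paper's full pipeline, and it assumes every class has at least two samples.

```python
import numpy as np

def mine_extreme_triplets(emb, labels):
    """Offline mining of 'extreme' triplets: for each anchor index a,
    return (anchor, farthest positive, nearest negative) computed over
    the full set of embeddings."""
    d = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)  # pairwise
    triplets = []
    for a in range(len(emb)):
        same = labels == labels[a]
        same[a] = False                                # exclude anchor
        diff = labels != labels[a]
        pos = np.where(same, d[a], -np.inf).argmax()   # farthest positive
        neg = np.where(diff, d[a], np.inf).argmin()    # nearest negative
        triplets.append((a, pos, neg))
    return triplets

# Toy 1-D embeddings: anchors 0-2 share a class, patch 3 is the other class.
emb = np.array([[0.0], [1.0], [2.0], [10.0]])
labels = np.array([0, 0, 0, 1])
trip = mine_extreme_triplets(emb, labels)
```

For anchor 0 the farthest positive is patch 2 and the nearest negative is patch 3, so the first mined triplet is (0, 2, 3).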
PennyLane is a Python 3 software framework for differentiable programming of quantum computers. The library provides a unified architecture for near-term quantum computing devices, supporting both qubit and continuous-variable paradigms. PennyLane's core feature is the ability to compute gradients of variational quantum circuits in a way that is compatible with classical techniques such as backpropagation. PennyLane thus extends the automatic differentiation algorithms common in optimization and machine learning to include quantum and hybrid computations. A plugin system makes the framework compatible with any gate-based quantum simulator or hardware. We provide plugins for hardware providers including the Xanadu Cloud, Amazon Braket, and IBM Quantum, allowing PennyLane optimizations to be run on publicly accessible quantum devices. On the classical front, PennyLane interfaces with accelerated machine learning libraries such as TensorFlow, PyTorch, JAX, and Autograd. PennyLane can be used for the optimization of variational quantum eigensolvers, quantum approximate optimization, quantum machine learning models, and many other applications.
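The hardware-compatible gradient rule behind this kind of differentiation can be illustrated without the library itself. Below is a tiny NumPy stand-in for a one-parameter circuit (RX on |0⟩ followed by a Z measurement, for which ⟨Z⟩ = cos θ) differentiated with the parameter-shift rule; PennyLane's own QNode machinery is far more general.

```python
import numpy as np

def rx_expval_z(theta):
    """Simulate a one-qubit circuit: apply RX(theta) to |0> and
    measure <Z>.  Analytically this equals cos(theta)."""
    rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                   [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
    state = rx @ np.array([1.0, 0.0])
    return float(abs(state[0]) ** 2 - abs(state[1]) ** 2)

def parameter_shift_grad(f, theta):
    """Exact gradient from two circuit evaluations shifted by +/- pi/2:
    the rule is hardware-compatible because it only ever runs the
    circuit, never differentiates through it."""
    return 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))

theta = 0.7
grad = parameter_shift_grad(rx_expval_z, theta)   # equals -sin(0.7)
```

The two shifted evaluations reproduce the analytic derivative −sin θ exactly, which is what lets such gradients plug into classical optimizers and autodiff frameworks.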
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
A Digital Twin (DT) is a simulation of a physical system that provides information to support decisions that add economic, social, or commercial value. Since the behaviour of a physical system changes over time, a DT must be continually updated with data from the physical system to reflect its changing behaviour. For resource-constrained systems, updating a DT is non-trivial because of challenges such as on-board learning and off-board data transfer. This paper presents a framework for updating data-driven DTs of resource-constrained systems, geared towards system health monitoring. The proposed solution consists of: (1) an on-board system running a light-weight DT that allows the prioritisation and parsimonious transfer of data generated by the physical system; and (2) off-board robust updating of the DT and detection of anomalous behaviours. Two case studies on a production gas turbine engine system demonstrate the digital representation accuracy for real-world, time-varying physical systems.
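One way to realise the prioritised, parsimonious transfer in step (1) is to forward only samples where the light-weight DT's prediction disagrees with the measurement. The `predict` function and `threshold` below are hypothetical stand-ins for the paper's prioritisation scheme.

```python
def select_for_transfer(samples, predict, threshold):
    """On-board parsimonious transfer sketch: run the light-weight DT's
    prediction on each (input, measured) pair and queue only samples
    whose residual exceeds a threshold, i.e. where the DT no longer
    reflects the physical system.  Highest-residual samples are placed
    first, so they are sent first under a limited transfer budget."""
    queue = []
    for x, measured in samples:
        residual = abs(measured - predict(x))
        if residual > threshold:
            queue.append((x, measured, residual))
    queue.sort(key=lambda item: item[2], reverse=True)
    return queue

dt_predict = lambda x: 2.0 * x   # hypothetical light-weight DT model
samples = [(1.0, 2.05), (2.0, 4.5), (3.0, 9.0)]
queue = select_for_transfer(samples, dt_predict, threshold=0.2)
```

In this toy run, only the two poorly-predicted samples are queued, largest residual first; the well-predicted one never leaves the device.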
We consider infinite horizon Markov decision processes (MDPs) with fast-slow structure, meaning that certain parts of the state space move "fast" (and in a sense, are more influential) while other parts transition more "slowly." Such structure is common in real-world problems where sequential decisions need to be made at high frequencies, yet information that varies at a slower timescale also influences the optimal policy. Examples include: (1) service allocation for a multi-class queue with (slowly varying) stochastic costs, (2) a restless multi-armed bandit with an environmental state, and (3) energy demand response, where both day-ahead and real-time prices play a role in the firm's revenue. Models that fully capture these problems often result in MDPs with large state spaces and large effective time horizons (due to frequent decisions), rendering them computationally intractable. We propose an approximate dynamic programming algorithmic framework based on the idea of "freezing" the slow states, solving a set of simpler finite-horizon MDPs (the lower-level MDPs), and applying value iteration (VI) to an auxiliary MDP that transitions on a slower timescale (the upper-level MDP). We also extend the technique to a function approximation setting, where a feature-based linear architecture is used. On the theoretical side, we analyze the regret incurred by each variant of our frozen-state approach. Finally, we give empirical evidence that the frozen-state approach generates effective policies using just a fraction of the computational cost, while illustrating that simply omitting slow states from the decision modeling is often not a viable heuristic.
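The lower-level solve in the frozen-state framework can be sketched as ordinary value iteration over the fast states with the slow state held fixed as a parameter. All transition probabilities and rewards below are illustrative, not taken from the paper.

```python
import numpy as np

def frozen_state_vi(z, P, r, gamma=0.9, iters=200):
    """Value iteration over the fast states with the slow state frozen
    at value z.  P[a, s, s'] gives fast-state transition probabilities
    under action a; r(z, s, a) is the one-step reward."""
    n_actions, n_fast, _ = P.shape
    V = np.zeros(n_fast)
    for _ in range(iters):
        Q = np.array([[r(z, s, a) + gamma * P[a, s] @ V
                       for a in range(n_actions)] for s in range(n_fast)])
        V = Q.max(axis=1)   # Bellman optimality backup
    return V

# Two fast states, two actions; the frozen slow state z scales rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0 transitions
              [[0.5, 0.5], [0.5, 0.5]]])   # action 1 transitions
reward = lambda z, s, a: z * (1.0 if s == 0 else 0.5) - 0.1 * a
V = frozen_state_vi(1.0, P, reward)
```

Solving this small problem repeatedly for different frozen values of z yields the family of lower-level value functions that the upper-level, slower-timescale MDP then stitches together.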